3-Statistics-Distribution-Markov Process

Markov process

In a Markov process {Markov process}, the probability of the next outcome or state depends only on the current state, not on the earlier sequence of states. Transition graphs show which states can follow which, and which event orders reach all states.
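A minimal sketch of simulating a finite-state Markov chain from a row-stochastic transition matrix (function and variable names are illustrative, not from the source):

```python
import random

def simulate_chain(transition, start, steps, seed=0):
    """Simulate a finite-state Markov chain: each next state is drawn
    from the row of the transition matrix for the current state."""
    rng = random.Random(seed)
    state, path = start, [start]
    for _ in range(steps):
        r, cum = rng.random(), 0.0
        for nxt, p in enumerate(transition[state]):
            cum += p
            if r < cum:
                state = nxt
                break
        path.append(state)
    return path

# Two states: state 0 tends to persist, state 1 tends to switch back to 0.
P = [[0.9, 0.1],
     [0.7, 0.3]]
path = simulate_chain(P, start=0, steps=1000)
```

The path visits state 0 most of the time, reflecting the chain's stationary distribution.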

Markov chain Monte Carlo method

Taking samples can estimate transition matrices, or using transition matrices can generate data points {Markov chain Monte Carlo method} (MCMC): a Markov chain is constructed whose stationary distribution is the target distribution. MCMC includes jump MCMC, reversible-jump MCMC, and birth-death sampling.
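The "taking samples can estimate transition matrices" direction can be sketched by counting observed transitions and normalizing each row (names are assumptions for illustration):

```python
def estimate_transition_matrix(path, n_states):
    """Estimate a row-stochastic transition matrix from an observed
    state sequence by normalizing transition counts."""
    counts = [[0] * n_states for _ in range(n_states)]
    for a, b in zip(path, path[1:]):
        counts[a][b] += 1
    matrix = []
    for row in counts:
        total = sum(row)
        # Unvisited states get a uniform row so the matrix stays stochastic.
        matrix.append([c / total if total else 1.0 / n_states for c in row])
    return matrix

path = [0, 0, 1, 0, 0, 0, 1, 1, 0, 0]
P_hat = estimate_transition_matrix(path, 2)
```

Each row of `P_hat` sums to 1; row i gives the estimated probabilities of moving from state i to each state.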

Markov-modulated Poisson process

Poisson processes whose rate is modulated by a continuous-time Markov chain {Markov-modulated Poisson process} are hidden Markov models with continuous time.

autocorrelation function

Functions {autocorrelation function} (ACF) measure the correlation between a process and a time-lagged copy of itself; slowly decaying autocorrelation can indicate that a process description needs extra dimensions or parameters.
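A minimal sketch of the sample autocorrelation function (the function name and normalization choice are assumptions):

```python
def acf(series, max_lag):
    """Sample autocorrelation at lags 0..max_lag: lagged autocovariance
    divided by the variance, so acf[0] is always 1."""
    n = len(series)
    mean = sum(series) / n
    var = sum((x - mean) ** 2 for x in series) / n
    out = []
    for lag in range(max_lag + 1):
        cov = sum((series[t] - mean) * (series[t + lag] - mean)
                  for t in range(n - lag)) / n
        out.append(cov / var)
    return out

r = acf([1.0, 2.0, 3.0, 4.0, 3.0, 2.0, 1.0, 2.0], 3)
```

Dividing the truncated covariance sum by n (rather than n - lag) keeps every value in [-1, 1].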

autoregressive integrated moving average

Time-series models {autoregressive integrated moving average} (ARIMA) combine autoregressive and moving-average terms on differenced data; in Markov-switching variants, the series varies around means set by hidden Markov chains.

Bayesian information criterion

Minus two times the Schwarz criterion {Bayesian information criterion} (BIC) can estimate hidden-Markov-model number of states: BIC = -2 * log(L) + k * log(n), for maximized likelihood L, free-parameter number k, and sample size n, with the lowest BIC preferred.
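A sketch of state-count selection by BIC; the log-likelihoods and parameter counts below are hypothetical placeholders, not results from the source:

```python
import math

def bic(log_likelihood, n_params, n_obs):
    """Bayesian information criterion: -2 log L + k log n (lower is better)."""
    return -2.0 * log_likelihood + n_params * math.log(n_obs)

# Hypothetical maximized log-likelihoods and free-parameter counts
# for 1-, 2-, and 3-state hidden Markov models on 300 observations.
candidates = {1: (-520.0, 2), 2: (-480.0, 7), 3: (-478.0, 14)}
scores = {m: bic(ll, k, 300) for m, (ll, k) in candidates.items()}
best = min(scores, key=scores.get)   # state count with the lowest BIC
```

Here the 3-state model barely improves the likelihood over the 2-state model, so the penalty term k log n makes BIC choose 2 states.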

Bayesian model

Models {Bayesian model} can estimate finite-state Markov chains.

direct Gibbs sampler

Methods {direct Gibbs sampler, sampling} can sample each hidden state in turn from its conditional distribution given the neighboring states and the data in hidden Markov chains.

EM algorithm

Algorithms {EM algorithm} alternate an expectation (E) step, which computes expected hidden-variable values under the current parameters, and a maximization (M) step, which re-estimates the parameters to maximize the expected likelihood.
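An illustrative EM sketch for a two-component Gaussian mixture with unit variances (a deliberately simplified setting; function name and initialization are assumptions):

```python
import math

def em_two_gaussians(data, iters=50):
    """EM for a two-component, unit-variance Gaussian mixture:
    E step computes responsibilities, M step updates weights and means."""
    mu = [min(data), max(data)]   # crude initial means
    w = [0.5, 0.5]                # mixing weights
    for _ in range(iters):
        # E step: responsibility of each component for each point.
        resp = []
        for x in data:
            dens = [w[k] * math.exp(-0.5 * (x - mu[k]) ** 2) for k in range(2)]
            s = sum(dens)
            resp.append([d / s for d in dens])
        # M step: re-estimate weights and means from responsibilities.
        for k in range(2):
            rk = sum(r[k] for r in resp)
            w[k] = rk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / rk
    return w, mu

data = [0.1, -0.2, 0.0, 4.9, 5.2, 5.1]
w, mu = em_two_gaussians(data)
```

With two well-separated clusters the means converge near 0 and 5, and the weights near 0.5 each.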

finite mixture model

Hidden Markov models {finite mixture model} can have equal transition-matrix rows, so successive states are independent draws from the same distribution.

forward-backward Gibbs sampler

Stochastic forward recursions, together with stochastic and non-stochastic backward recursions {forward-backward Gibbs sampler, sampling}, can sample the whole hidden-state sequence jointly from its posterior distribution.

forward-backward recursion

Recursion {forward-backward recursion, sampling} computes posterior state probabilities at each time: a forward pass accumulates evidence up to the current time, and a backward pass folds in later observations. It is similar to the Kalman-filter prediction and smoothing steps in state-space models.
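A sketch of the forward-backward recursion with standard scaling to avoid underflow (the small two-state example and all names are illustrative assumptions):

```python
def forward_backward(obs, pi, A, B):
    """Smoothed state probabilities P(state_t | all observations) for a
    discrete HMM: forward pass, backward pass, then pointwise combination."""
    n, T = len(pi), len(obs)
    # Forward pass (each alpha row rescaled to sum to 1).
    alpha = [[pi[i] * B[i][obs[0]] for i in range(n)]]
    scale = [sum(alpha[0])]
    alpha[0] = [a / scale[0] for a in alpha[0]]
    for t in range(1, T):
        a_t = [B[j][obs[t]] * sum(alpha[t - 1][i] * A[i][j] for i in range(n))
               for j in range(n)]
        c = sum(a_t)
        scale.append(c)
        alpha.append([a / c for a in a_t])
    # Backward pass, using the same scale factors.
    beta = [[1.0] * n for _ in range(T)]
    for t in range(T - 2, -1, -1):
        beta[t] = [sum(A[i][j] * B[j][obs[t + 1]] * beta[t + 1][j]
                       for j in range(n)) / scale[t + 1] for i in range(n)]
    # Combine: gamma_t(i) is proportional to alpha_t(i) * beta_t(i).
    gamma = []
    for t in range(T):
        g = [alpha[t][i] * beta[t][i] for i in range(n)]
        s = sum(g)
        gamma.append([x / s for x in g])
    return gamma

pi = [0.6, 0.4]
A = [[0.7, 0.3], [0.4, 0.6]]
B = [[0.9, 0.1], [0.2, 0.8]]   # emission probabilities for symbols 0 and 1
gamma = forward_backward([0, 0, 1], pi, A, B)
```

Each row of `gamma` sums to 1; since state 0 strongly emits symbol 0 here, the first observation makes state 0 the likely state at time 0.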

hidden Markov model

A finite-state Markov chain whose states are unobserved {hidden Markov model} (HMM) generates observations: each hidden state emits data points from a state-dependent distribution.
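The HMM generative process can be sketched directly (the sampler and its parameters are illustrative assumptions):

```python
import random

def sample_hmm(pi, A, B, T, seed=1):
    """Generate (states, observations) from an HMM: hidden states follow
    a Markov chain, each observation depends only on the current state."""
    rng = random.Random(seed)
    def draw(probs):
        r, cum = rng.random(), 0.0
        for i, p in enumerate(probs):
            cum += p
            if r < cum:
                return i
        return len(probs) - 1
    states, obs = [], []
    s = draw(pi)
    for _ in range(T):
        states.append(s)
        obs.append(draw(B[s]))   # emission depends only on the current state
        s = draw(A[s])           # next state depends only on the current state
    return states, obs

pi = [0.5, 0.5]
A = [[0.8, 0.2], [0.2, 0.8]]
B = [[0.9, 0.1], [0.1, 0.9]]   # each state mostly emits its own symbol
states, obs = sample_hmm(pi, A, B, T=200)
```

Because each emission row is 0.9 on the diagonal, most observations match the hidden state that produced them.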

model

Hidden Markov models are graphical models, and Bayesian models can treat them as finite-state Markov chains with priors on the parameters.

purposes

Hidden Markov chains model signal processing, biology, genetics, ecology, image analysis, economics, and network security. Applications often compare a normal (error-free or non-criminal) distribution to an anomalous (error or criminal) distribution.

transition

A hidden Markov chain has an initial state distribution and a time-constant (homogeneous) transition matrix.

calculations

Calculations include estimating parameters by recursion {forward-backward recursion, Markov} {forward-backward Gibbs sampler, Markov} {direct Gibbs sampler, Markov}, filling in missing data, finding state-space size, preventing label switching, assessing validity, and testing convergence by likelihood recursion.

hidden semi-Markov model

Models {hidden semi-Markov model} can hold each state for an explicitly modeled duration (sojourn time), rather than the geometric holding times of ordinary hidden Markov chains.

Langevin diffusion

In Monte Carlo simulations, if smaller particles surround a particle, where will the particle be at a later time? The answer uses white noise {Langevin equation} {Langevin diffusion}: the mean squared displacement grows as 6 * k * T * t / (m * lambda) = 6 * D * t, where m * lambda is the friction coefficient, T is absolute temperature, k is the Boltzmann constant, and D = (k * T) / (m * lambda) is the long-time diffusion coefficient.
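A sketch of overdamped Langevin diffusion in one dimension, taking the diffusion coefficient D as given; the Euler-Maruyama step size and particle counts are illustrative assumptions:

```python
import random

def simulate_diffusion(n_particles, n_steps, dt, D, seed=0):
    """Overdamped 1D Langevin diffusion: each step adds Gaussian noise
    with variance 2*D*dt (Euler-Maruyama discretization)."""
    rng = random.Random(seed)
    sigma = (2.0 * D * dt) ** 0.5
    positions = [0.0] * n_particles
    for _ in range(n_steps):
        positions = [x + rng.gauss(0.0, sigma) for x in positions]
    return positions

# By the Einstein relation, D = k*T/(m*lambda); here D is supplied directly.
pos = simulate_diffusion(n_particles=2000, n_steps=100, dt=0.01, D=1.0)
msd = sum(x * x for x in pos) / len(pos)   # mean squared displacement
```

After total time t = n_steps * dt = 1, the mean squared displacement should be near 2 * D * t = 2 (the 1D analogue of the 6 * D * t law in three dimensions).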

likelihood recursion

Recursion {likelihood recursion} can calculate the joint likelihood of the observations, typically as an average log-likelihood per time step.

marginal estimation

Methods {marginal estimation} can estimate hidden Markov distribution and probability.

maximum a posteriori

Methods {maximum a posteriori estimation} (MAP) can estimate hidden Markov distribution and probability.

Metropolis-Hastings

Methods {Metropolis-Hastings sampler} propose a new point from a distribution, often multivariate normal, centered on the current data point, and accept or reject it with the Hastings acceptance probability.
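A minimal random-walk Metropolis sketch with a Gaussian proposal centered on the current point (target, names, and tuning are illustrative assumptions):

```python
import math
import random

def metropolis_hastings(log_target, start, steps, step_size=1.0, seed=0):
    """Random-walk Metropolis: propose from a normal centered on the
    current point; a symmetric proposal makes the Hastings ratio
    reduce to the ratio of target densities."""
    rng = random.Random(seed)
    x, samples = start, []
    for _ in range(steps):
        prop = x + rng.gauss(0.0, step_size)
        if math.log(rng.random()) < log_target(prop) - log_target(x):
            x = prop
        samples.append(x)
    return samples

# Target: standard normal, via its log density up to a constant.
samples = metropolis_hastings(lambda x: -0.5 * x * x, start=5.0, steps=20000)
mean = sum(samples[5000:]) / len(samples[5000:])
```

Discarding the first 5000 samples as burn-in, the remaining sample mean should sit near the target mean of 0.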

predictive distribution

Distributions {predictive distribution} can measure how well models predict actual data.

Schwarz criterion

Asymptotic approximations {Schwarz criterion} to the logarithm of the state-number probability depend on the likelihood maximized over the transition matrix, the sample size, and the number of free parameters.

state-space model

Models {state-space model} are continuous-state analogues of hidden Markov models, typically using Gaussian state and observation distributions.

Viterbi algorithm

Algorithms {Viterbi algorithm} can find the most likely hidden-state trajectory, using a forward-backward-style recursion with maximization rather than averaging, and can so estimate the hidden Markov state sequence jointly.
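A sketch of the Viterbi algorithm in log space, reusing the small two-state model style from above (all parameters are illustrative assumptions):

```python
import math

def viterbi(obs, pi, A, B):
    """Most likely hidden-state sequence: max-product forward recursion
    in log space, then traceback through the stored argmax pointers."""
    n = len(pi)
    # delta[i]: best log-probability of any state path ending in state i.
    delta = [math.log(pi[i]) + math.log(B[i][obs[0]]) for i in range(n)]
    back = []
    for o in obs[1:]:
        ptr, new = [], []
        for j in range(n):
            best_i = max(range(n), key=lambda i: delta[i] + math.log(A[i][j]))
            ptr.append(best_i)
            new.append(delta[best_i] + math.log(A[best_i][j]) + math.log(B[j][o]))
        delta, back = new, back + [ptr]
    # Trace back from the best final state.
    state = max(range(n), key=lambda i: delta[i])
    path = [state]
    for ptr in reversed(back):
        state = ptr[state]
        path.append(state)
    return path[::-1]

pi = [0.6, 0.4]
A = [[0.7, 0.3], [0.4, 0.6]]
B = [[0.9, 0.1], [0.2, 0.8]]
path = viterbi([0, 0, 1, 1], pi, A, B)   # decoded state sequence
```

Maximizing instead of summing over predecessors is the only change from the forward recursion; here the two leading 0 observations decode to state 0 and the trailing 1s to state 1.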

Related Topics in Table of Contents

3-Statistics-Distribution


Date Modified: 2022.0225